HaptiProjection: Multimodal Mobile Information Discovery

Authors

  • Simon Robinson
  • Matt Jones
Abstract

Handheld projectors are steadily emerging as a potential display method of the future, offering many opportunities for interesting interactions with the world around us. However, to date little attention has been focused on how people might move from mobile device usage to projection of interactive content. In this position paper we address this by proposing a method for location-based content discovery that helps to merge the physical and digital spaces we live in. We describe an early prototype, developed to demonstrate interaction concepts, and summarise the challenges and future developments needed for this type of system.

Similar Articles

Cluster Based Cross Layer Intelligent Service Discovery for Mobile Ad-Hoc Networks

The ability to discover services in a Mobile Ad hoc Network (MANET) is a major prerequisite. Cluster Based Cross Layer Intelligent Service Discovery for MANET (CBISD) is a cluster-based architecture featuring caching of semantic details of services and intelligent forwarding using network-layer mechanisms. The cluster-based architecture using semantic knowledge provides scalability and accuracy. Also, the mini...

Full text

A Testbed for Evaluating Multimodal Dialogue Systems for Small Screen Devices

This paper discusses the requirements for developing a multimodal spoken dialogue system for mobile phone applications. Since visual output as part of the multimodal system is limited by the restricted screen size of mobile phones, research in the field of information visualisation for small screen devices is discussed and combinations of these techniques with spoken output are sketched. ...

Full text

MATCH: An Architecture for Multimodal Dialogue Systems

Mobile interfaces need to allow the user and system to adapt their choice of communication modes according to user preferences, the task at hand, and the physical and social environment. We describe a multimodal application architecture which combines finite-state multimodal language processing, a speech-act based multimodal dialogue manager, dynamic multimodal output generation, and user-tailo...

Full text

Online Object Categorization Using Multimodal Information Autonomously Acquired by a Mobile Robot

In this paper, we propose a robot that acquires multimodal information, i.e., visual, auditory, and haptic information, fully autonomously using its embodiment. We also propose batch and online algorithms for multimodal categorization based on the acquired multimodal information and partial words given by human users. To obtain multimodal information, the robot detects an object on a flat surfa...

Full text

M3I: A Framework for Mobile Multimodal Interaction

We present M3I, an extensive multimodal interaction framework for mobile devices, which simplifies and accelerates the creation of multimodal applications for prototyping and research. It provides an abstraction of information representations in different communication channels, unifies access to implicit and explicit information, and wires together the logic behind context-sensitive modality s...

Full text

Journal:

Volume   Issue

Pages   -

Publication date: 2010